Bayesian Support Vector Regression Using a Unified Loss Function

Authors

Abstract

Similar articles

A Unified Loss Function in Bayesian Framework for Support Vector Regression

In this paper, we propose a unified non-quadratic loss function for regression known as soft insensitive loss function (SILF). SILF is a flexible model and possesses most of the desirable characteristics of popular non-quadratic loss functions, such as Laplacian, Huber’s and Vapnik’s ε-insensitive loss function. We describe the properties of SILF and illustrate our assumption on the underlying ...

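The SILF described above is a piecewise loss with a flat insensitive zone, a quadratic transition band, and linear tails. The snippet below is a minimal NumPy sketch of that construction, following the piecewise form commonly used in the SILF literature; the function name silf and the default values of eps and beta are illustrative choices rather than values taken from the paper.

import numpy as np

def silf(delta, eps=0.1, beta=0.5):
    """Soft insensitive loss function (SILF) -- illustrative sketch.

    delta : residual y - f(x), scalar or array
    eps   : insensitivity width parameter (eps > 0)
    beta  : smoothness parameter (0 < beta <= 1)
    """
    d = np.abs(np.asarray(delta, dtype=float))
    lo = (1.0 - beta) * eps          # inner edge of the quadratic band
    hi = (1.0 + beta) * eps          # outer edge of the quadratic band
    return np.where(
        d <= lo,
        0.0,                                        # flat insensitive zone
        np.where(
            d <= hi,
            (d - lo) ** 2 / (4.0 * beta * eps),     # quadratic transition band
            d - eps,                                # linear tails
        ),
    )

In this form, beta -> 0 recovers Vapnik's ε-insensitive loss, eps -> 0 approaches the Laplacian loss, and beta = 1 gives a Huber-style loss, which matches the unifying behaviour the abstract describes.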

Bayesian Support Vector Regression

We show that the Bayesian evidence framework can be applied to both ε-support vector regression (ε-SVR) and ν-support vector regression (ν-SVR) algorithms. Standard SVR training can be regarded as performing level one inference of the evidence framework, while levels two and three allow automatic adjustments of the regularization and kernel parameters respectively, without the need of a validation set.

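The level-two and level-three adjustments described above amount to choosing the regularization and kernel parameters by maximizing the evidence on the training data instead of using a validation set. The sketch below illustrates that selection mechanism with the ordinary Gaussian marginal likelihood of a kernel model as a stand-in for the paper's SVR-specific evidence; the toy data, the RBF kernel, and the candidate grid are our own choices.

import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 60)[:, None]                 # toy 1-D inputs
y = np.sin(X).ravel() + 0.2 * rng.standard_normal(60)   # noisy targets

def rbf(A, B, width):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def log_evidence(width, noise):
    """Log marginal likelihood of a Gaussian-likelihood kernel model,
    used here only to show evidence-based hyperparameter selection."""
    n = len(y)
    K = rbf(X, X, width) + noise ** 2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

# Pick the kernel width ("level 3") and noise level ("level 2") that maximize
# the evidence on the training data alone -- no validation set is involved.
candidates = [(w, s) for w in (0.3, 0.5, 1.0, 2.0) for s in (0.05, 0.1, 0.2, 0.5)]
best = max(candidates, key=lambda p: log_evidence(*p))
print("selected (kernel width, noise level):", best)

In the paper the quantity being maximized is the evidence of the SVR model itself, so that the level-one (MAP) solution coincides with standard SVR training; the grid search above only makes the selection step concrete.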

Robust Support Vector Regression with Flexible Loss Function

In the interest of deriving a regressor that is robust to outliers, we propose a support vector regression (SVR) based on a non-convex quadratic insensitive loss function with flexible coefficient and margin. The proposed loss function can be approximated by a difference of convex functions (DC). The resultant optimization is a DC program. We employ Newton’s method to solve it. The proposed model c...

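A DC program of the kind mentioned above can be illustrated on a much smaller problem. The sketch below applies the standard DC algorithm to a toy robust location estimate under a truncated quadratic loss, written as a difference of two convex functions; it is a generic illustration of DC programming, not the paper's flexible-loss SVR, and the paper solves its program with Newton's method rather than the closed-form step used here.

import numpy as np

def robust_mean_dca(y, c=1.0, iters=100):
    """Minimize sum_i min((y_i - m)^2, c) over m with the DC algorithm.

    The truncated quadratic loss is the difference of two convex functions,
    g(m) = sum_i (y_i - m)^2 and h(m) = sum_i max((y_i - m)^2 - c, 0).
    Each iteration linearizes h at the current iterate and solves the
    resulting convex subproblem in closed form."""
    y = np.asarray(y, dtype=float)
    m = y.mean()                              # start from the ordinary mean
    for _ in range(iters):
        flat = (y - m) ** 2 > c               # points in the flat part of the loss
        m_new = (y.sum() - (y[flat] - m).sum()) / len(y)
        if abs(m_new - m) < 1e-12:
            break
        m = m_new
    return m

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(2.0, 0.3, 50), [25.0, 30.0]])  # two gross outliers
print(data.mean(), robust_mean_dca(data))     # the DC estimate stays near 2.0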

Bayesian Inference in Support Vector Regression

Support Vector Regression with a Generalized Quadratic Loss

The standard SVR formulation for real-valued function approximation on multidimensional spaces is based on the ε-insensitive loss function, where errors are considered not correlated. Due to this, local information in the feature space which can be useful to improve the prediction model is disregarded. In this paper we address this problem by defining a generalized quadratic loss where the co-oc...

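The abstract above is cut off before the loss is written out, so the sketch below only illustrates the general idea it points at: replacing the usual per-sample, uncorrelated penalty with a quadratic form that couples the residuals of similar inputs. The RBF similarity matrix S, the ridge-style regularizer, and all parameter values are our own choices, not the paper's formulation.

import numpy as np

def rbf(A, B, width=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_coupled_quadratic(X, y, S, lam=1e-2, width=1.0):
    """Kernel regression f(x) = sum_j a_j k(x_j, x) trained with the
    generalized quadratic loss (y - K a)^T S (y - K a) + lam * a^T K a.

    S couples the residuals of similar inputs; S = I recovers ordinary
    kernel ridge regression. Setting the gradient to zero (and assuming
    K is invertible) gives (S K + lam I) a = S y."""
    K = rbf(X, X, width)
    a = np.linalg.solve(S @ K + lam * np.eye(len(y)), S @ y)
    return a, K

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(40, 1))
y = np.sin(2.0 * X).ravel() + 0.1 * rng.standard_normal(40)
S = rbf(X, X, width=0.5)                      # correlate errors of nearby inputs
a, K = fit_coupled_quadratic(X, y, S)
print("training RMSE:", float(np.sqrt(np.mean((K @ a - y) ** 2))))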

Journal

Journal title: IEEE Transactions on Neural Networks

Year: 2004

ISSN: 1045-9227

DOI: 10.1109/tnn.2003.820830